64 research outputs found

    Alternating Minimization, Scaling Algorithms, and the Null-Cone Problem from Invariant Theory

    Alternating minimization heuristics seek to solve a (difficult) global optimization task by iteratively solving a sequence of (much easier) local optimization tasks on different parts (or blocks) of the input parameters. While popular and widely applicable, very few instances of this heuristic have been rigorously shown to converge to optimality, and even fewer to do so efficiently. In this paper we present a general framework which is amenable to rigorous analysis, and demonstrate its applicability. Its main feature is that each local optimization domain is a group of invertible matrices, together acting naturally on tensors, and the optimization problem is to minimize the norm of an input tensor under this joint action. The solution of this optimization problem captures a basic problem in Invariant Theory, called the null-cone problem. This algebraic framework turns out to encompass natural computational problems in combinatorial optimization, algebra, analysis, quantum information theory, and geometric complexity theory. It includes, and extends to higher dimensions, the recent advances on (2-dimensional) operator scaling. Our main result is a fully polynomial time approximation scheme for this general problem, which may be viewed as a multi-dimensional scaling algorithm. This directly leads to progress on some of the problems in the areas above, and to a unified view of others. We explain how faster convergence of an algorithm for the same problem would allow resolving central open problems. Our main techniques come from Invariant Theory and include its rich non-commutative duality theory, as well as new bounds on the bit sizes of coefficients of invariant polynomials. They enrich the algorithmic toolbox of this very computational field of mathematics, and are directly related to some challenges in geometric complexity theory (GCT).
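
    As a concrete low-dimensional illustration of the alternating-minimization pattern described above (not the paper's actual tensor-scaling algorithm), the sketch below alternately rescales the rows and columns of a positive square matrix until it is approximately doubly stochastic. This is the classical Sinkhorn iteration, the 2-dimensional matrix-scaling problem that the operator- and tensor-scaling algorithms mentioned in the abstract generalize; the tolerance and iteration cap are illustrative choices.

    ```python
    import numpy as np

    def sinkhorn_scaling(A, tol=1e-9, max_iters=10_000):
        """Alternately normalize rows and columns of a positive square matrix A.

        Each step solves an "easy" local problem exactly (fixing one block of
        scaling variables); the iteration converges to a doubly stochastic
        matrix, the 2-dimensional analogue of the scaling problems above.
        """
        A = np.array(A, dtype=float)
        for _ in range(max_iters):
            A /= A.sum(axis=1, keepdims=True)   # make every row sum to 1
            A /= A.sum(axis=0, keepdims=True)   # make every column sum to 1
            # columns are now exact; stop once row sums are also close to 1
            if np.max(np.abs(A.sum(axis=1) - 1.0)) < tol:
                break
        return A

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        M = rng.random((4, 4)) + 0.1            # strictly positive input
        S = sinkhorn_scaling(M)
        print(S.sum(axis=0), S.sum(axis=1))     # both close to all-ones
    ```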

    The Frequent Items Problem in Online Streaming under Various Performance Measures

    In this paper, we strengthen the competitive analysis results obtained for a fundamental online streaming problem, the Frequent Items Problem. Additionally, we contribute a more detailed analysis of this problem, using alternative performance measures that supplement the insight gained from competitive analysis. The results also contribute to the general study of performance measures for online algorithms. It has long been known that competitive analysis suffers from drawbacks in certain situations, and many alternative measures have been proposed. However, more systematic comparative studies of performance measures have only recently been initiated, and we continue this work, using competitive analysis, relative interval analysis, and relative worst order analysis on the Frequent Items Problem.
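
    To make the underlying problem concrete, the sketch below is the well-known deterministic Misra-Gries summary for frequent items in a stream; it is shown only as an illustration of the Frequent Items Problem, not as one of the online algorithms analyzed in the paper. The parameter k (the number of counters kept) is our choice for the example.

    ```python
    from collections import Counter

    def misra_gries(stream, k):
        """Misra-Gries summary with at most k counters.

        Every item occurring more than len(stream)/(k+1) times is guaranteed
        to appear among the returned candidates; reported counts underestimate
        the true counts by at most len(stream)/(k+1).
        """
        counters = {}
        for item in stream:
            if item in counters:
                counters[item] += 1
            elif len(counters) < k:
                counters[item] = 1
            else:
                # decrement every counter; drop counters that reach zero
                for key in list(counters):
                    counters[key] -= 1
                    if counters[key] == 0:
                        del counters[key]
        return counters

    if __name__ == "__main__":
        data = list("abracadabra")
        print(misra_gries(data, k=2))        # candidate frequent items
        print(Counter(data).most_common(2))  # exact counts, for comparison
    ```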

    Competitive Algorithms for Online Leasing Problem in Probabilistic Environments

    We integrate probability distributions into pure competitive analysis to improve its performance measure, since input sequences of the leasing problem have a simple structure and favorable statistical properties. Assuming the inputs follow a geometric distribution, we obtain optimal online algorithms and their competitive ratios. Moreover, introducing an interest rate diminishes the uncertainty involved in the decision-making process and postpones the optimal purchase date.
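
    For background, the sketch below shows the classical deterministic break-even rule for the rent-or-buy (ski rental) problem that underlies leasing: lease until the accumulated rent would exceed the purchase price, then buy. It is 2-competitive against worst-case inputs; the geometric-distribution and interest-rate refinements from the abstract are not modeled here, and the purchase price p and horizons are illustrative.

    ```python
    def break_even_cost(purchase_price, horizon):
        """Cost of the break-even rule with unit lease cost per period:
        lease for the first purchase_price - 1 periods, then buy
        (if the need lasts that long)."""
        if horizon < purchase_price:
            return horizon                       # never buy
        return (purchase_price - 1) + purchase_price

    def optimal_offline_cost(purchase_price, horizon):
        """An offline adversary either leases throughout or buys on day 1."""
        return min(horizon, purchase_price)

    if __name__ == "__main__":
        p = 10
        for T in (3, 9, 10, 50):
            alg = break_even_cost(p, T)
            opt = optimal_offline_cost(p, T)
            print(f"horizon={T:3d}  ALG={alg:3d}  OPT={opt:3d}  ratio={alg/opt:.2f}")
    ```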

    Non-intersecting squared Bessel paths: critical time and double scaling limit

    We consider the double scaling limit for a model of $n$ non-intersecting squared Bessel processes in the confluent case: all paths start at time $t=0$ at the same positive value $x=a$, remain positive, and are conditioned to end at time $t=1$ at $x=0$. After appropriate rescaling, the paths fill a region in the $tx$-plane as $n\to\infty$ that intersects the hard edge at $x=0$ at a critical time $t=t^{*}$. In a previous paper (arXiv:0712.1333), the scaling limits for the positions of the paths at time $t\neq t^{*}$ were shown to be the usual scaling limits from random matrix theory. Here, we describe the limit as $n\to\infty$ of the correlation kernel at critical time $t^{*}$ and in the double scaling regime. We derive an integral representation for the limit kernel which bears some connections with the Pearcey kernel. The analysis is based on the study of a $3\times 3$ matrix-valued Riemann-Hilbert problem by the Deift-Zhou steepest descent method. The main ingredient is the construction of a local parametrix at the origin, out of the solutions of a particular third-order linear differential equation, and its matching with a global parametrix.
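
    For readers unfamiliar with the terminology: non-intersecting path ensembles of this type form determinantal point processes, so the "correlation kernel" $K_n$ determines all correlation functions of the particle positions. The generic relation is recalled below (standard notation $R_k$, $K_n$; this is not this paper's specific kernel).

    ```latex
    % All k-point correlation functions of the positions x_1,\dots,x_n at a
    % fixed time are determinants built from the correlation kernel K_n.
    R_k(x_1,\dots,x_k) \;=\; \det\bigl[\,K_n(x_i,x_j)\,\bigr]_{i,j=1}^{k},
    \qquad k = 1,\dots,n .
    ```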

    The Value of Information for Populations in Varying Environments

    The notion of information pervades informal descriptions of biological systems, but formal treatments face the problem of defining a quantitative measure of information rooted in a concept of fitness, which is itself an elusive notion. Here, we present a model of population dynamics where this problem is amenable to a mathematical analysis. In the limit where any information about future environmental variations is common to the members of the population, our model is equivalent to known models of financial investment. In this case, the population can be interpreted as a portfolio of financial assets, and previous analyses have shown that a key quantity of Shannon's communication theory, the mutual information, sets a fundamental limit on the value of information. We show that this bound can be violated when accounting for features that are irrelevant in finance but inherent to biological systems, such as the stochasticity present at the individual level. This leads us to generalize the measures of uncertainty and information usually encountered in information theory.
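
    The fundamental limit referred to in the financial-investment case is the classical Kelly/Cover result: side information $Y$ about the market (or environment) outcome $X$ can raise the optimal growth (doubling) rate of a log-optimal portfolio by at most the mutual information $I(X;Y)$. The notation below is generic, not the paper's.

    ```latex
    % Increase in the optimal doubling rate W^* due to side information Y
    % is bounded by the mutual information between X and Y.
    \Delta W \;=\; W^{*}(X \mid Y) - W^{*}(X) \;\le\; I(X;Y)
    \;=\; \sum_{x,y} p(x,y)\,\log \frac{p(x,y)}{p(x)\,p(y)} .
    ```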

    Greedy D-Approximation Algorithm for Covering with Arbitrary Constraints and Submodular Cost

    This paper describes a simple greedy D-approximation algorithm for any covering problem whose objective function is submodular and non-decreasing, and whose feasible region can be expressed as the intersection of arbitrary (closed upwards) covering constraints, each of which constrains at most D variables of the problem. (A simple example is Vertex Cover, with D = 2.) The algorithm generalizes previous approximation algorithms for fundamental covering problems and for online paging and caching problems. A sketch of the D = 2 special case follows.
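
    To make the D = 2 example concrete, the sketch below is the textbook greedy 2-approximation for unweighted Vertex Cover, which adds both endpoints of any uncovered edge; it illustrates a D-approximation for D = 2 and is not the paper's general algorithm for submodular costs.

    ```python
    def vertex_cover_2approx(edges):
        """Greedy 2-approximation for unweighted Vertex Cover (D = 2).

        Repeatedly pick an uncovered edge and add BOTH endpoints to the cover.
        The chosen edges form a matching, and any cover must contain at least
        one endpoint of each matched edge, so |cover| <= 2 * OPT.
        """
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    if __name__ == "__main__":
        graph = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
        print(sorted(vertex_cover_2approx(graph)))
    ```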